"It's much easier to build an AI system that can detect a nipple than it is to determine what is linguistically hate speech," he said, when asked about inappropriate content on the world's largest social network.
His comment inspired a string of jokes, but Zuckerberg was making a serious point. Abuse on Facebook takes many forms, from nudity to racial slurs to scams and drug listings, and getting rid of all of it is not a one-size-fits-all proposition. Whenever Zuckerberg talks about cleansing Facebook of inappropriate content, he mentions two things:
1) Facebook will hire 20,000 content moderators by the end of the year to find and review objectionable material.
2) The company is investing in artificial intelligence tools to proactively detect abusive posts and take them down.
On Wednesday, during its F8 developers conference in San Jose, California, Facebook revealed for the first time exactly how it uses its AI tools for content moderation. The bottom line is that automated AI tools help mainly in seven areas: nudity, graphic violence, terrorist content, hate speech, spam, fake accounts and suicide prevention.
For things like nudity and graphic violence, problematic posts are detected by technology called "computer vision," software that's trained to flag the content because of certain elements in the image. Sometimes that graphic content is taken down, and sometimes it's put behind a warning screen.
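To make the idea concrete, here is a minimal, hypothetical sketch of that kind of pipeline: an image classifier scores a post against policy categories, and confidence thresholds decide whether it is removed, put behind a warning screen or left alone. The classifier, labels and thresholds below are illustrative stand-ins, not Facebook's actual system.

```python
# Hypothetical sketch of a computer-vision moderation pipeline.
# The classifier, labels and thresholds are illustrative stand-ins.

REMOVE_THRESHOLD = 0.90  # high confidence: take the post down
WARN_THRESHOLD = 0.60    # medium confidence: put it behind a warning screen

def classify_image(image_bytes: bytes) -> dict[str, float]:
    """Stand-in for a trained image classifier that scores an image
    against policy categories (e.g., nudity, graphic violence)."""
    # A real system would run a neural network here; fixed scores
    # keep this sketch self-contained and runnable.
    return {"nudity": 0.05, "graphic_violence": 0.72}

def moderate(image_bytes: bytes) -> str:
    scores = classify_image(image_bytes)
    top_label, top_score = max(scores.items(), key=lambda kv: kv[1])
    if top_score >= REMOVE_THRESHOLD:
        return f"remove ({top_label}: {top_score:.2f})"
    if top_score >= WARN_THRESHOLD:
        return f"warning_screen ({top_label}: {top_score:.2f})"
    return "allow"

print(moderate(b"..."))  # -> warning_screen (graphic_violence: 0.72)
```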
Something like hate speech is harder to police solely with AI because there are often different intents behind that speech. It can be sarcastic or self-referential, or it may try to raise awareness about hate speech. It's also harder to detect hate speech in languages that are less widely spoken, because the software has fewer examples to learn from.
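That ambiguity is easy to demonstrate. The toy keyword matcher below, which is purely hypothetical and uses a placeholder token standing in for a real slur, flags abuse, counter-speech and sarcasm identically. That is exactly the failure mode a more sophisticated language model has to overcome.

```python
# Toy illustration of why hate speech is harder than imagery: the same
# words appear in abuse, in counter-speech and in sarcasm, and a naive
# keyword matcher (hypothetical, with a placeholder token) can't tell
# them apart.
import string

BLOCKLIST = {"slur_example"}  # placeholder standing in for a real slur

def naive_flag(post: str) -> bool:
    words = (w.strip(string.punctuation) for w in post.lower().split())
    return any(w in BLOCKLIST for w in words)

posts = [
    "you are a slur_example",                           # actual abuse
    "calling people slur_example is never acceptable",  # counter-speech
    "oh sure, I'm a slur_example, very funny",          # sarcasm
]
for post in posts:
    print(naive_flag(post), "-", post)  # prints True for all three
```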
"We have a lot of work ahead of us," Guy Rosen, vice president of product management, said in an interview last week. "The goal will be to get to this content before anyone can see it."
Falling through the cracks
Facebook is opening up about its AI tools after Zuckerberg and his team were slammed last month over a scandal involving Cambridge Analytica. The digital consultancy accessed personal data on up to 87 million Facebook users and used it without their permission. The controversy has prompted questions about Facebook's policies, including what responsibilities it has in policing the content on its platform and to the more than 2.2 billion users who log into Facebook each month.
But even with thousands of moderators and AI tools, objectionable content still falls through the cracks. For example, Facebook's AI is used to detect fake accounts, but bots and scammers still exist on the platform. The New York Times reported last week that fake accounts pretending to be Zuckerberg and Facebook COO Sheryl Sandberg are being used to try to scam people out of their cash.
And when Zuckerberg testified before Congress last month, lawmakers repeatedly asked about decision making for policing content. Rep. David McKinley, a Republican from West Virginia, mentioned illegal listings for opioids posted on Facebook and asked why they hadn't been taken down. Other Republican lawmakers asked why the social network removed posts by Diamond and Silk, two African-American supporters of President Donald Trump with 1.6 million Facebook followers. In 10 hours of testimony over two days, Zuckerberg, 33, tried to convince legislators that Facebook had a grasp of these kinds of issues and a process in place for handling them.